Interactivity¶
The visualization system consumes the Interpret API and is responsible for both displaying explanations and providing the underlying rendering infrastructure.
Visualizing with the show method¶
Interpret exposes a top-level method show, which acts as the surface for rendering explanation visualizations. It can produce either a dropdown widget or a dashboard, depending on what’s provided.
Show a single explanation¶
For basic use cases, it is best to show one explanation at a time. The rendered widget provides a dropdown to select between visualizations. For example, for a global explanation it provides an overview, along with graphs for each feature, as shown with the code below:
import pandas as pd
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
df = pd.read_csv(
    "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
    header=None)
df.columns = [
    "Age", "WorkClass", "fnlwgt", "Education", "EducationNum",
    "MaritalStatus", "Occupation", "Relationship", "Race", "Gender",
    "CapitalGain", "CapitalLoss", "HoursPerWeek", "NativeCountry", "Income"
]
df = df.sample(frac=0.05)
train_cols = df.columns[0:-1]
label = df.columns[-1]
X = df[train_cols]
y = df[label]
seed = 1
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=seed)
ebm = ExplainableBoostingClassifier(random_state=seed)
ebm.fit(X_train, y_train)
ebm_global = ebm.explain_global()
show(ebm_global)
Show a specific visualization within an explanation¶
If you are after one specific visualization within an explanation, you can specify it with a key as the second function argument.
show(ebm_global, "Age")
Show multiple explanations for comparison¶
If you are running in a local environment (such as running Python on your laptop), show can expose a dashboard for comparison, which can be invoked in the following way (provide a list of explanations as the first argument):
from interpret.glassbox import LogisticRegression, ClassificationTree
# We have to transform categorical variables to use Logistic Regression and Decision Tree
X_enc = pd.get_dummies(X, prefix_sep='.')
feature_names = list(X_enc.columns)
X_train_enc, X_test_enc, y_train, y_test = train_test_split(X_enc, y, test_size=0.20, random_state=seed)
lr = LogisticRegression(random_state=seed, feature_names=feature_names, penalty='l1', solver='liblinear')
lr.fit(X_train_enc, y_train)
lr_global = lr.explain_global()

tree = ClassificationTree()
tree.fit(X_train_enc, y_train)
tree_global = tree.explain_global()

show([ebm_global, lr_global, tree_global])
Interpret API¶
The API is responsible for standardizing ML interpretability explainers and explanations, providing a consistent interface for both users and developers. To support this, it also provides foundational top-level methods that support visualization and data access.
Explainers are glassbox or blackbox algorithms that will produce an explanation, an artifact that is ready for visualizations or further data processing.
Explainer¶
An explainer will produce an explanation from its .explain_* method. These explanations normally provide an understanding of global model behavior or local individual predictions (.explain_global and .explain_local respectively).
class interpret.api.base.ExplainerMixin¶
An object that computes explanations. This is a contract required for InterpretML.

Variables:
    available_explanations – A list of strings subsetting the following: “perf”, “data”, “local”, “global”.
    explainer_type – A string that is one of the following: “blackbox”, “model”, “specific”, “data”, “perf”.
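To make the contract concrete, here is a minimal illustrative sketch of a custom explainer that follows the ExplainerMixin shape. It is plain Python and does not import interpret; the class names (ConstantExplainer, ConstantExplanation) are hypothetical, not part of the library.

```python
class ConstantExplanation:
    """A toy explanation object satisfying the ExplanationMixin shape."""
    explanation_type = "global"
    name = "Constant"
    selector = None  # optionally a dataframe describing the data

    def data(self, key=None):
        # Must return a serializable dictionary.
        return {"value": 0.0}

    def visualize(self, key=None):
        # A real implementation returns a Plotly figure, an HTML
        # string, or a Dash component; a bare string stands in here.
        return "<p>constant</p>"


class ConstantExplainer:
    # The two contract variables from ExplainerMixin:
    available_explanations = ["global"]  # subset of perf/data/local/global
    explainer_type = "blackbox"          # one of blackbox/model/specific/data/perf

    def explain_global(self):
        # An explain_* method produces an explanation artifact.
        return ConstantExplanation()
```

Note that the explainer's explain_* methods return explanation objects whose explanation_type appears in the explainer's available_explanations list.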
Explanation¶
An explanation is a self-contained object that helps you understand either its target model’s behavior or a set of individual predictions. The explanation should provide access to visualizations through the .visualize method, and to data processing through the .data method. Both .visualize and .data should share the same function signature in terms of arguments.
class interpret.api.base.ExplanationMixin¶
The result of calling explain_* from an Explainer. Responsible for providing data and/or visualization. This is a contract required for InterpretML.

Variables:
    explanation_type – A string that is one of the explainer’s available explanations. Should be one of “perf”, “data”, “local”, “global”.
    name – A string that denotes the name of the explanation for display purposes.
    selector – An optional dataframe that describes the data. Each row of the dataframe corresponds to a respective data item.

abstract data(key=None)¶
Provides specific explanation data.

Parameters:
    key – A number/string that references a specific data item.

Returns:
    A serializable dictionary.

abstract visualize(key=None)¶
Provides interactive visualizations.

Parameters:
    key – Either a scalar or list that indexes the internal object for sub-plotting. If an overall visualization is requested, pass None.

Returns:
    A Plotly figure, HTML as a string, or a Dash component.
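The shared key-based signature of .data and .visualize can be sketched as follows. This is an illustrative toy class, not library code (interpret is not imported, and FeatureImportanceExplanation is a hypothetical name): key=None yields the overall payload, while a specific key selects one data item.

```python
class FeatureImportanceExplanation:
    explanation_type = "global"
    name = "Feature Importance"
    selector = None

    def __init__(self, importances):
        self._importances = importances  # dict: feature name -> score

    def data(self, key=None):
        # key=None -> the overall, serializable payload
        if key is None:
            return {"names": list(self._importances),
                    "scores": list(self._importances.values())}
        # a specific key -> one data item
        return {"names": [key], "scores": [self._importances[key]]}

    def visualize(self, key=None):
        # Same signature as .data; a real implementation would build a
        # Plotly figure or Dash component from the same payload.
        d = self.data(key)
        return "<ul>{}</ul>".format(
            "".join("<li>{}: {}</li>".format(n, s)
                    for n, s in zip(d["names"], d["scores"])))
```

For example, FeatureImportanceExplanation({"Age": 0.4, "Gender": 0.1}).data("Age") selects only the “Age” item, mirroring how show(ebm_global, "Age") selects a single plot.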
Show¶
The show method is a universal function that provides visualizations for whatever explanation or explanations are provided as its arguments. Implementation-wise, it provides some visualization platform (e.g. a dashboard or widget) and exposes each explanation’s visualizations as given by the .visualize call.
interpret.show(explanation, key=-1, **kwargs)¶
Provides an interactive visualization for a given explanation or list of explanations. By default, the visualization provided is not preserved when the notebook exits.

Parameters:
    explanation – Either a single Explanation or a list of Explanations to render as a visualization.
    key – Specific index of the explanation to visualize.
    **kwargs – Kwargs passed down to the provider’s render() call.

Returns:
    None.
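The argument handling described above can be sketched in plain Python. This toy show_sketch function is a hypothetical illustration only (interpret is not imported): it normalizes a scalar explanation into a list and forwards the key to each explanation’s .visualize call, here treating the default key=-1 as “no specific plot selected” (an assumption for this sketch). The real show additionally selects a rendering provider (widget or dashboard) and returns None.

```python
def show_sketch(explanation, key=-1):
    # Accept either a single explanation or a list of them.
    explanations = explanation if isinstance(explanation, list) else [explanation]
    rendered = []
    for exp in explanations:
        # Assumption for this sketch: key=-1 means "overall view",
        # which maps to visualize(None) per the ExplanationMixin contract.
        rendered.append(exp.visualize(None if key == -1 else key))
    return rendered  # the real show renders these and returns None
```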